
    Declarative Event-Based Workflow as Distributed Dynamic Condition Response Graphs

    We present Dynamic Condition Response Graphs (DCR Graphs), a declarative, event-based process model inspired by the workflow language employed by our industrial partner and conservatively generalizing prime event structures. A DCR Graph is a directed graph whose nodes represent the events that can happen and whose arrows represent four relations between events: condition, response, include, and exclude. Distributed DCR Graphs are then obtained by assigning roles to events and principals. We give a graphical notation inspired by related work by van der Aalst et al. We exemplify the use of distributed DCR Graphs on a simple workflow taken from a field study at a Danish hospital, pointing out their flexibility compared to imperative workflow models. Finally, we provide a mapping from DCR Graphs to Büchi automata. (In Proceedings PLACES 2010, arXiv:1110.385)
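    The marking semantics behind this model (sets of executed, pending, and included events, updated through the four relations) is compact enough to capture directly. The following Python sketch is illustrative only, assuming the standard DCR marking semantics; the two-event prescribe/sign example is invented in the spirit of the hospital workflow.

```python
from dataclasses import dataclass, field

@dataclass
class DCRGraph:
    """Sketch of a DCR Graph with the standard marking semantics.

    The marking is three sets of events: executed, pending (responses
    still owed), and included. Each relation maps an event to the set
    of events it points to."""
    events: set
    conditions: dict  # e -> events that must have been executed before e
    responses: dict   # e -> events that become pending when e executes
    includes: dict    # e -> events (re-)included when e executes
    excludes: dict    # e -> events excluded when e executes
    executed: set = field(default_factory=set)
    pending: set = field(default_factory=set)
    included: set = None

    def __post_init__(self):
        if self.included is None:
            self.included = set(self.events)  # initially all events included

    def enabled(self, e):
        # e is enabled iff it is included and all of its *included*
        # conditions have already been executed
        return (e in self.included and
                self.conditions.get(e, set()) & self.included <= self.executed)

    def execute(self, e):
        assert self.enabled(e), f"{e} is not enabled"
        self.executed.add(e)
        self.pending.discard(e)                       # e discharges its own obligation
        self.pending |= self.responses.get(e, set())  # new response obligations
        self.included -= self.excludes.get(e, set())
        self.included |= self.includes.get(e, set())

    def accepting(self):
        # a run is accepting when no included event is still pending
        return not (self.pending & self.included)

# Invented two-event example: signing is required after prescribing,
# and cannot happen before it.
g = DCRGraph(events={"prescribe", "sign"},
             conditions={"sign": {"prescribe"}},
             responses={"prescribe": {"sign"}},
             includes={}, excludes={})
g.execute("prescribe")
print(g.enabled("sign"), g.accepting())  # True False (a response is owed)
g.execute("sign")
print(g.accepting())                     # True
```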

    Generalizing predicates with string arguments

    The least general generalization (LGG) of strings may cause overgeneralization when generalizing the clauses of predicates with string arguments. We propose a specific generalization (SG) for strings to reduce this overgeneralization. SGs of strings are used to generalize a set of strings representing the arguments of a set of positive examples of a predicate with string arguments. To create an SG of two strings, a unique match sequence between the strings is found first. A unique match sequence of two strings consists of similarities and differences that represent the similar and differing parts of those strings. The differences in the unique match sequence are then replaced to create an SG of the strings. In the generalization process, either a coverage algorithm based on SGs of strings or learning heuristics based on match sequences are used.
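    As a rough illustration of the idea, the sketch below keeps the similar parts of two strings and replaces each differing part with a fresh variable. Python's difflib stands in for the paper's unique match sequence, which is an assumption on my part; the paper defines its own matching.

```python
from difflib import SequenceMatcher

def specific_generalization(s1, s2):
    """Sketch of a specific generalization (SG) of two strings: keep
    the similar parts, replace each differing part with a fresh
    variable. difflib approximates the unique match sequence."""
    matcher = SequenceMatcher(None, s1, s2, autojunk=False)
    parts, fresh = [], 0
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            parts.append(s1[i1:i2])    # similarity: kept literally
        else:
            parts.append(f"X{fresh}")  # difference: generalized to a variable
            fresh += 1
    return "".join(parts)

print(specific_generalization("ab2cd", "ab3cd"))  # abX0cd
print(specific_generalization("aXbc", "aYZbc"))   # aX0bc
```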

    Spatio-Temporal Querying In Video Databases

    A video data model that supports spatio-temporal querying in videos is presented. The model focuses on the semantic content of video streams: objects, events, activities, and the spatial properties of objects are its main interests. It enables the user to query fuzzy spatio-temporal relationships between video objects as well as the trajectories of moving objects. A prototype of the proposed model has been implemented.
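    A minimal sketch of how such fuzzy spatio-temporal queries might look, assuming an invented per-frame Appearance record and a linear membership function for directional relations; the actual model's representation is richer.

```python
import math
from dataclasses import dataclass

@dataclass
class Appearance:
    """One detected object in one frame (an invented record type)."""
    obj: str
    frame: int
    x: float
    y: float

def fuzzy_west_of(a: Appearance, b: Appearance) -> float:
    """Degree to which a is west of b in one frame: 1.0 when b lies
    due east of a, falling off linearly to 0.0 at 90 degrees. The
    linear membership function is an illustrative assumption."""
    angle = math.atan2(b.y - a.y, b.x - a.x)  # direction from a to b
    return max(0.0, 1.0 - abs(angle) / (math.pi / 2))

def frames_west_of(appearances, obj_a, obj_b, threshold=0.7):
    """Frames where obj_a is west of obj_b to at least `threshold`."""
    by_frame = {}
    for ap in appearances:
        by_frame.setdefault(ap.frame, {})[ap.obj] = ap
    hits = []
    for frame, objs in sorted(by_frame.items()):
        if obj_a in objs and obj_b in objs:
            degree = fuzzy_west_of(objs[obj_a], objs[obj_b])
            if degree >= threshold:
                hits.append((frame, round(degree, 2)))
    return hits

def trajectory(appearances, obj):
    """A moving object's trajectory as a frame-ordered list of points."""
    return [(ap.frame, ap.x, ap.y)
            for ap in sorted(appearances, key=lambda ap: ap.frame)
            if ap.obj == obj]
```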

    Comparison of Cuboid and Tracklet Features for Action Recognition on Surveillance Videos

    For the recognition of human actions in surveillance videos, action recognition methods from the literature are analyzed and feature extraction methods promising for such videos are identified. Among local methods, the two most popular feature extraction methods, Dollar's "cuboid" features and Raptis and Soatto's "tracklet" features, are tested and compared. In their original applications the two feature types were classified with different methods; to obtain a fairer comparison, both are classified here with the same classification method. In addition, since it is more realistic for recognition in real videos, the two most popular datasets, KTH and Weizmann, are evaluated with a splitting method. The test results show that tracklet features are more suitable than cuboid features for action recognition in real surveillance videos.
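    The evaluation protocol described, one shared classifier applied to both feature types under a hold-out split, can be sketched as a standard bag-of-features pipeline. The k-means codebook and RBF SVM below are assumed stand-ins; the abstract does not name the shared classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def bag_of_features(descriptor_sets, codebook):
    """Quantize each video's local descriptors (cuboid or tracklet)
    against a shared codebook and return normalized histograms."""
    k = codebook.n_clusters
    hists = []
    for descs in descriptor_sets:
        words = codebook.predict(descs)
        hist, _ = np.histogram(words, bins=np.arange(k + 1))
        hists.append(hist / max(hist.sum(), 1))
    return np.array(hists)

def evaluate(descriptor_sets, labels, k=100):
    """Split evaluation with one shared classifier, so both feature
    types are compared on equal footing. The k-means codebook and RBF
    SVM are assumed stand-ins for the paper's actual setup."""
    codebook = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptor_sets))
    X = bag_of_features(descriptor_sets, codebook)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# cuboid_sets and tracklet_sets would each be a list with one
# (n_descriptors, dim) array per video; labels the action classes:
# print(evaluate(cuboid_sets, labels), evaluate(tracklet_sets, labels))
```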

    Natural Language Interface on a Video Data Model

    Building on a content-based spatio-temporal video data model, a natural language interface is implemented to query the video data. Queries given as English sentences are parsed with the Link Parser, and the semantic representations of the queries are extracted from their syntactic structures using information extraction techniques. In the final step, the extracted semantic representations are used to call the relevant parts of the underlying spatio-temporal video data model to obtain the query results.
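    A toy sketch of the three-stage pipeline (parse, extract a semantic representation, dispatch to the data model). Regex patterns stand in for the Link Parser and information extraction stages, which is an assumption on my part, and the video_model methods named below are hypothetical.

```python
import re

# Regex patterns approximating the parse + extraction stages; the real
# system derives semantics from syntactic linkages, not surface patterns.
PATTERNS = [
    (re.compile(r"(\w+) (?:is )?(west|east|north|south) of (?:the )?(\w+)", re.I),
     lambda m: ("spatial", m.group(1), m.group(2).lower(), m.group(3))),
    (re.compile(r"trajectory of (?:the )?(\w+)", re.I),
     lambda m: ("trajectory", m.group(1))),
]

def parse_query(sentence):
    """Map an English query to a semantic-representation tuple."""
    for pattern, build in PATTERNS:
        match = pattern.search(sentence)
        if match:
            return build(match)
    raise ValueError(f"unsupported query: {sentence!r}")

def answer(semantic, video_model):
    """Dispatch a semantic representation to the video data model.
    `video_model` is a hypothetical object exposing spatial_query and
    trajectory_query methods in the spirit of the underlying model."""
    kind, *args = semantic
    if kind == "spatial":
        return video_model.spatial_query(*args)
    return video_model.trajectory_query(*args)

print(parse_query("Show frames where the car is west of the bus"))
# ('spatial', 'car', 'west', 'bus')
print(parse_query("Show the trajectory of car1"))
# ('trajectory', 'car1')
```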